Medical systematic reviews typically require assessing all the documents retrieved by a search. The reason is two-fold: the task aims for "total recall"; and documents retrieved using Boolean search form an unordered set, so it is unclear how an assessor could examine only a subset. Screening prioritisation is the process of ranking this unordered set of retrieved documents, allowing assessors to begin the downstream processes of systematic review creation earlier, leading to earlier completion of the review, or even allowing the documents ranked least relevant to be skipped. Screening prioritisation requires highly effective ranking methods. Pre-trained language models are state-of-the-art on many IR tasks but have yet to be applied to systematic review screening prioritisation. In this paper, we apply several pre-trained language models to the systematic review document ranking task, both directly and fine-tuned. An empirical analysis compares the effectiveness of neural methods with that of traditional methods for this task. We also investigate different types of document representations for neural methods and their impact on ranking performance. Our results show that BERT-based rankers outperform the current state-of-the-art screening prioritisation methods. However, BERT rankers and existing methods can be complementary, so further improvements may be achieved by using them in conjunction.
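The complementarity noted above can be exploited with a simple rank fusion step. A minimal sketch using reciprocal rank fusion (RRF); the two ranked lists below are hypothetical stand-ins for the output of a BERT-based ranker and a traditional ranker, not results from the paper:

```python
from collections import defaultdict

def reciprocal_rank_fusion(rankings, k=60):
    """Fuse several ranked lists of document ids via RRF.

    Each document's fused score is the sum over lists of
    1 / (k + rank), where rank is 1-based. k=60 is the
    commonly used default.
    """
    scores = defaultdict(float)
    for ranking in rankings:
        for rank, doc_id in enumerate(ranking, start=1):
            scores[doc_id] += 1.0 / (k + rank)
    return sorted(scores, key=scores.get, reverse=True)

# Hypothetical rankings for illustration only.
bert_ranking = ["d3", "d1", "d2"]
traditional_ranking = ["d1", "d2", "d4"]
fused = reciprocal_rank_fusion([bert_ranking, traditional_ranking])
```

Documents ranked highly by both systems (here `d1`) rise to the top of the fused list, which is one simple way the two families of methods could be used in conjunction.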
Entity Alignment (EA) aims to find equivalent entities between two Knowledge Graphs (KGs). While numerous neural EA models have been devised, they are mainly learned using labelled data only. In this work, we argue that different entities within one KG should have compatible counterparts in the other KG due to the potential dependencies among the entities. Making compatible predictions should thus be one of the goals of training an EA model, along with fitting the labelled data: this aspect, however, is neglected in current methods. To power neural EA models with compatibility, we devise a training framework by addressing three problems: (1) how to measure the compatibility of an EA model; (2) how to inject the property of being compatible into an EA model; (3) how to optimise the parameters of the compatibility model. Extensive experiments on widely-used datasets demonstrate the advantages of integrating compatibility within EA models. In fact, state-of-the-art neural EA models trained within our framework using just 5% of the labelled data can achieve effectiveness comparable to supervised training using 20% of the labelled data.
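As one concrete reading of problem (1), compatibility can be proxied by how close a model's pairwise alignment scores are to a one-to-one matching between the two KGs. The measure below (mutual-best-match rate) is an assumption for illustration, not the paper's actual definition:

```python
def mutual_match_rate(scores):
    """Fraction of source entities whose best-scoring target entity
    also picks them back as its best source: a rough one-to-one
    consistency proxy for an EA model's predictions.

    scores[i][j] is the alignment score between source entity i
    and target entity j.
    """
    n, m = len(scores), len(scores[0])
    best_tgt = [max(range(m), key=lambda j: scores[i][j]) for i in range(n)]
    best_src = [max(range(n), key=lambda i: scores[i][j]) for j in range(m)]
    mutual = sum(1 for i in range(n) if best_src[best_tgt[i]] == i)
    return mutual / n

# Toy score matrix: entities 0 and 1 match cleanly, but entity 2's
# best target (column 1) prefers entity 1 back -> incompatible.
scores = [
    [0.9, 0.1, 0.2],
    [0.2, 0.8, 0.3],
    [0.4, 0.7, 0.5],
]
rate = mutual_match_rate(scores)
```

A training framework like the one described could, for instance, penalise predictions that lower such a consistency measure in addition to fitting the labelled seed alignments.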
High-quality medical systematic reviews require comprehensive literature searches to ensure that recommendations and outcomes are sufficiently reliable. Indeed, finding relevant medical literature is a key phase in constructing systematic reviews, and it typically involves both domain experts (medical researchers) and search experts (information specialists) in developing the search queries. Queries in this context are based on Boolean logic, are highly complex, include both free-text terms and index terms from standardised terminologies (e.g., the Medical Subject Headings (MeSH) thesaurus), and are difficult to build. The use of MeSH terms, in particular, has been shown to improve the quality of search results. However, identifying the correct MeSH terms to include in a query is difficult: information specialists are often unfamiliar with the MeSH database and unsure about the appropriateness of MeSH terms for a query. Naturally, the full value of the MeSH terminology is often not exploited. This paper investigates methods for suggesting MeSH terms based on an initial Boolean query that contains only free-text terms. In this context, we devise both lexical and pre-trained language model based methods. These methods promise to automatically identify highly effective MeSH terms for inclusion in a systematic review query. Our study provides an empirical evaluation of several MeSH term suggestion methods. We further perform an extensive analysis of the MeSH term suggestions made by each method and of how these suggestions affect the effectiveness of the Boolean queries.
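A lexical baseline of the kind described can be sketched as matching the free-text query terms against MeSH entry terms by token overlap. The miniature vocabulary below is hypothetical, standing in for the real MeSH thesaurus:

```python
def suggest_mesh_terms(query_terms, mesh_vocab, top_n=2):
    """Rank MeSH headings by token overlap with the free-text
    query terms: a simple lexical matching baseline.

    mesh_vocab maps a MeSH heading to its entry terms.
    """
    query_tokens = {t.lower() for term in query_terms for t in term.split()}
    scored = []
    for heading, entry_terms in mesh_vocab.items():
        tokens = {t.lower() for term in entry_terms for t in term.split()}
        overlap = len(query_tokens & tokens)
        if overlap:
            scored.append((overlap, heading))
    scored.sort(reverse=True)
    return [heading for _, heading in scored[:top_n]]

# Hypothetical miniature vocabulary for illustration only.
mesh_vocab = {
    "Myocardial Infarction": ["heart attack", "myocardial infarction"],
    "Hypertension": ["high blood pressure", "hypertension"],
    "Diabetes Mellitus": ["diabetes"],
}
suggested = suggest_mesh_terms(["heart attack", "chest pain"], mesh_vocab)
```

A pre-trained language model based method would replace the token-overlap score with a semantic similarity between the query and each heading's entry terms, which is where the approaches studied in the paper go beyond purely lexical matching.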